
[SPARK-30821][K8S] Handle executor failure with multiple containers #29924

Closed
huskysun wants to merge 2 commits into apache:master from huskysun:exec-sidecar-failure

Conversation

huskysun (Contributor) commented Oct 1, 2020

Handle executor failure with multiple containers

Added a Spark property, spark.kubernetes.executor.checkAllContainers, with the default being false. When it is true and the pod restart policy is "Never", the executor snapshot takes all containers in the executor pod into consideration when deciding whether the executor is in the "Running" state. Also added the new Spark property to the docs.

### What changes were proposed in this pull request?

Check all containers in the executor pod when reporting executor status, if the `spark.kubernetes.executor.checkAllContainers` property is set to true.

### Why are the changes needed?

Currently, a pod remains "running" as long as at least one container is running. This prevents Spark from noticing when a container has failed in an executor pod with multiple containers. With this change, the user can configure the behavior so that if any container in the executor pod has failed, whether the executor process or one of its sidecars, the pod is considered failed and the executor will be rescheduled.

### Does this PR introduce _any_ user-facing change?

Yes, a new Spark property is added.
The user can now choose whether to turn on this feature using the `spark.kubernetes.executor.checkAllContainers` property.
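
For reference, turning the feature on is just a matter of setting the property; the same flag can also be passed on the command line via `spark-submit --conf spark.kubernetes.executor.checkAllContainers=true`. A minimal sketch in Scala, where only the property key comes from this PR and the object and application name are illustrative placeholders:

```scala
import org.apache.spark.SparkConf

object CheckAllContainersExample {
  // Illustrative opt-in: only the property key below is introduced by this PR;
  // the surrounding configuration is placeholder setup.
  val conf = new SparkConf()
    .setAppName("sidecar-aware-app") // hypothetical application name
    .set("spark.kubernetes.executor.checkAllContainers", "true")
}
```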

### How was this patch tested?

A unit test was added and all tests pass.
I tried to run the integration tests by following the instructions [here](https://spark.apache.org/developer-tools.html) (section "Testing K8S") and also [here](https://github.com/apache/spark/blob/master/resource-managers/kubernetes/integration-tests/README.md), but I wasn't able to run them smoothly, as they fail to talk to the minikube cluster. Maybe it's because my minikube version is too new (I'm using v1.13.1)? Since I've been trying for two days and still can't make it work, I decided to submit this PR in the hope that the Jenkins test will pass.

huskysun (Contributor, Author) commented Oct 1, 2020

This PR is a continuation of the effort for #27568.
@holdenk Please take a look, as you said you're interested in merging this. (Sorry for taking this long to submit the PR 😅 )
cc @gongx

gongx commented Oct 13, 2020

@holdenk Could you review this PR, please?

holdenk (Contributor) left a comment

Jenkins ok to test. Sorry for the delay. Tentative LGTM pending CI/Jenkins.

@@ -59,11 +65,19 @@ object ExecutorPodsSnapshot extends Logging {
      case "pending" =>
        PodPending(pod)
      case "running" =>
-       PodRunning(pod)
+       if (shouldCheckAllContainers &&
+           "Never" == pod.getSpec.getRestartPolicy &&
holdenk (Contributor) commented on this diff:

Good addition 👍
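
For readers without the full diff open, the check introduced above boils down to scanning every container status in the executor pod. A rough, hedged sketch against the fabric8 Kubernetes client model used by Spark's K8s backend (the object and helper names and exact structure are illustrative, not the merged code):

```scala
import scala.collection.JavaConverters._
import io.fabric8.kubernetes.api.model.Pod

object ExecutorContainerCheckSketch {
  // Illustrative helper: with checkAllContainers enabled and restartPolicy "Never",
  // a "running" pod is treated as failed if any of its containers has terminated
  // with a non-zero exit code.
  def anyContainerFailed(pod: Pod): Boolean = {
    pod.getStatus.getContainerStatuses.asScala.exists { status =>
      val terminated = status.getState.getTerminated
      terminated != null && terminated.getExitCode != 0
    }
  }
}
```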

huskysun (Contributor, Author) commented:

@holdenk Fixed the indentation. Please take a look again. Thanks!

huskysun (Contributor, Author) commented:

Also, is Jenkins triggered? It looks like the bot hasn't made any comment on this PR.

holdenk (Contributor) commented Oct 13, 2020

Jenkins OK to test

holdenk (Contributor) commented Oct 14, 2020

Huh, weird. Jenkins test this please.

holdenk (Contributor) commented Oct 14, 2020

cc @SparkQA
Jenkins OK to test

holdenk (Contributor) commented Oct 14, 2020

Filed a ticket for @shaneknapp (https://issues.apache.org/jira/browse/SPARK-33151), since it seems like Jenkins might just be stuck.

gongx commented Oct 14, 2020

@holdenk Thank you for the help. I do see that the Jenkins status is at "Asked to test", but it isn't making any progress.

SparkQA commented Oct 14, 2020

Test build #129762 has finished for PR 29924 at commit a4958b9.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

SparkQA commented Oct 14, 2020

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/34368/

SparkQA commented Oct 14, 2020

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/34368/

holdenk (Contributor) left a comment
LGTM. If no one has concerns I'll try and merge tomorrow. Thanks for working on this @huskysun :)

huskysun (Contributor, Author) commented:
Thank you @holdenk!
Also thanks @khogeland for starting on this effort and @gongx for the help on coordination.

gongx commented Oct 16, 2020

@holdenk Thank you for the review. Please merge it if you get time.

gongx commented Oct 20, 2020

@holdenk Please help us to merge this change if you have time. Thank you.

huskysun (Contributor, Author) commented:

Hi @holdenk, sorry to keep bugging you. Any updates on this? Is there any further work needed behind the scenes (like release cadence or something) that prevents this from being merged? Thanks!

dongjoon-hyun (Member) commented:
Retest this please.

SparkQA commented Oct 23, 2020

Test build #130215 has finished for PR 29924 at commit a4958b9.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

SparkQA commented Oct 23, 2020

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/34815/

SparkQA commented Oct 23, 2020

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/34815/

asfgit closed this in f659527 on Oct 24, 2020
holdenk (Contributor) commented Oct 24, 2020

Merged and backported as a fix to branch-3.

holdenk (Contributor) commented Oct 24, 2020

@huskysun what is your JIRA account handle?

asfgit pushed a commit that referenced this pull request Oct 24, 2020
Handle executor failure with multiple containers

Closes #29924 from huskysun/exec-sidecar-failure.

Authored-by: Shiqi Sun <s.sun@salesforce.com>
Signed-off-by: Holden Karau <hkarau@apple.com>
(cherry picked from commit f659527)
Signed-off-by: Holden Karau <hkarau@apple.com>
huskysun (Contributor, Author) commented Oct 25, 2020 via email

holdenk pushed a commit to holdenk/spark that referenced this pull request Oct 27, 2020
Handle executor failure with multiple containers

Closes apache#29924 from huskysun/exec-sidecar-failure.

Authored-by: Shiqi Sun <s.sun@salesforce.com>
Signed-off-by: Holden Karau <hkarau@apple.com>
(cherry picked from commit f659527)
Signed-off-by: Holden Karau <hkarau@apple.com>
dongjoon-hyun added a commit that referenced this pull request Jul 12, 2024
[SPARK-48887][K8S] Enable `spark.kubernetes.executor.checkAllContainers` by default

### What changes were proposed in this pull request?

This PR aims to enable `spark.kubernetes.executor.checkAllContainers` by default from Apache Spark 4.0.0.

### Why are the changes needed?

Since Apache Spark 3.1.0, `spark.kubernetes.executor.checkAllContainers` has been supported and is useful because the [sidecar pattern](https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/) is used in many cases. It also prevents the user mistake of forgetting or ignoring sidecar failures, by always reporting sidecar failures via the executor status.
- #29924

### Does this PR introduce _any_ user-facing change?

- This configuration is a no-op when there is no other container.
- This will correctly report errors from user-provided containers when such additional containers exist.

### How was this patch tested?

Both `true` and `false` are covered by our CI test coverage since Apache Spark 3.1.0.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #47337 from dongjoon-hyun/SPARK-48887.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
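
Because this follow-up flips the default to true starting with Spark 4.0.0, jobs that depended on the old behavior (the pod staying "running" as long as the executor container is alive, regardless of sidecars) would need to opt out explicitly. A minimal sketch, where the property key comes from the commit above and the rest is illustrative:

```scala
import org.apache.spark.SparkConf

object CheckAllContainersOptOut {
  // Illustrative opt-out for Spark 4.0.0+: restore the pre-4.0 behavior by
  // disabling the all-containers check.
  val conf = new SparkConf()
    .set("spark.kubernetes.executor.checkAllContainers", "false")
}
```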